Robust Image Analysis by L1-Norm Semi-supervised Learning
Authors
Abstract
This paper presents a novel L1-norm semi-supervised learning algorithm for robust image analysis, based on a new L1-norm formulation of Laplacian regularization, which is the key step of graph-based semi-supervised learning. Since our L1-norm Laplacian regularization is defined directly over the eigenvectors of the normalized Laplacian matrix, we can formulate semi-supervised learning as an L1-norm linear reconstruction problem that is solved effectively with sparse coding. By working with only a small subset of eigenvectors, we further develop a fast sparse coding algorithm for our L1-norm semi-supervised learning. Owing to the sparsity induced by sparse coding, the proposed algorithm can tolerate noise in the data to some extent and thus has important applications in robust image analysis, such as noise-robust image classification and noise reduction for visual and textual bag-of-words (BOW) models. In particular, this paper makes the first attempt to obtain robust image representations by sparse co-refinement of visual and textual BOW models. The experimental results demonstrate the promising performance of the proposed algorithm.
Similar resources
Noise-Robust Semi-Supervised Learning by Large-Scale Sparse Coding
This paper presents a large-scale sparse coding algorithm to deal with the challenging problem of noise-robust semi-supervised learning over very large data with only a few noisy initial labels. By giving an L1-norm formulation of Laplacian regularization directly based upon the manifold structure of the data, we transform noise-robust semi-supervised learning into a generalized sparse coding prob...
Learning With ℓ1-Graph for Image Analysis
The graph construction procedure essentially determines the potential of graph-oriented learning algorithms for image analysis. In this paper, we propose a process to build the so-called directed l1-graph, in which the vertices are all the samples and the ingoing edge weights of each vertex describe its l1-norm-driven reconstruction from the remaining samples and the noise. Then, a s...
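The directed l1-graph construction described above can be sketched as follows: each sample's ingoing edge weights come from an L1-penalized reconstruction of that sample from all remaining samples. This is an illustrative sketch only (an ISTA solver on random data; the paper's exact formulation, including its explicit noise term, is not reproduced).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))        # 20 samples, 5-dim features (toy data)

def l1_weights(A, b, lam=0.05, iters=300):
    """Solve min_w 0.5 * ||b - A w||^2 + lam * ||w||_1 with ISTA."""
    step = 1.0 / (np.linalg.norm(A, 2) ** 2)
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        g = w - step * A.T @ (A @ w - b)
        w = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
    return w

n = X.shape[0]
G = np.zeros((n, n))                # directed edge-weight matrix
for i in range(n):
    others = np.delete(np.arange(n), i)
    # Ingoing edges to vertex i: sparse reconstruction of x_i
    # from the remaining samples (columns of the dictionary).
    G[i, others] = l1_weights(X[others].T, X[i])
```

The L1 penalty keeps each row of `G` sparse, so each vertex is connected to only a few reconstructing neighbors rather than the whole sample set.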
Adaptive Loss Minimization for Semi-Supervised Elastic Embedding
Semi-supervised learning usually only predicts labels for unlabeled data appearing in the training data, and cannot effectively predict labels for testing data never appearing in the training set. To handle this out-of-sample problem, many inductive methods impose a constraint that the predicted label matrix be exactly equal to a linear model. In practice, this constraint is too rigid to ca...
Collective Kernel Construction in Noisy Environment
Kernels are similarity functions and play important roles in machine learning. Traditional kernels are built directly from the feature vectors of data instances xi, xj. However, data can be noisy, with missing or corrupted values in the feature vectors. In this paper, we propose a new approach, the Collective Kernel, to build kernels especially from noisy data. We also derive an effic...
Unsupervised Total Variation Loss for Semi-supervised Deep Learning of Semantic Segmentation
We introduce a novel unsupervised loss function for learning semantic segmentation with deep convolutional neural nets (ConvNets) when densely labeled training images are not available. More specifically, the proposed loss function penalizes the L1-norm of the gradient of the label probability vector image, i.e. its total variation, produced by the ConvNet. This can be seen as a regularization term...
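The penalty described above, the L1-norm of the spatial gradient of a label probability map, can be written down in a few lines. This is a hedged sketch on a single-channel map with illustrative names, not the authors' ConvNet training code.

```python
import numpy as np

def tv_loss(P):
    """Anisotropic total variation: sum of absolute differences
    between vertically and horizontally adjacent entries of P."""
    return np.abs(np.diff(P, axis=0)).sum() + np.abs(np.diff(P, axis=1)).sum()

# Toy 2x3 probability map: the loss counts every change of value
# between neighboring pixels, so piecewise-constant maps are cheap.
P = np.array([[0.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
loss = tv_loss(P)   # 1 vertical + 2 horizontal unit jumps = 3.0
```

A perfectly constant map incurs zero loss, which is why the penalty pushes the network toward spatially coherent label regions.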
Journal: CoRR
Volume: abs/1110.3109
Publication year: 2011